
    Digital Advertising and News: Who Advertises on News Sites and How Much Those Ads Are Targeted

    Analyzes trends in advertising in twenty-two news operations, including shifts to digital advertising, use of consumer data to target ads, types of ads, and industries represented among advertisers by media type

    Understanding the Participatory News Consumer

    Analyzes survey findings on the impact of social media and mobile connectivity on news consumption behavior by demographics and political affiliation. Examines sources; topics; participation by sharing, commenting on, or creating news; and views on media

    Collimated Whole Volume Light Scattering in Homogeneous Finite Media

    Crepuscular rays form when light encounters an optically thick or opaque medium that masks out portions of the visible scene. Real-time applications commonly estimate this phenomenon by connecting paths between light sources and the camera after a single scattering event. We provide a set of algorithms for integrating and sampling single-scattered collimated light in a box-shaped medium and show how they extend to multiple scattering and convex media. First, a method for exactly integrating the unoccluded single scattering in a rectilinear box-shaped medium is proposed and paired with a ratio estimator and a moment-based approximation. Compared to previous methods, it requires only a single sample in unoccluded areas to compute the whole integral solution and converges faster in the rest of the scene. Second, we derive an importance sampling scheme that accounts for the entire geometry of the medium. This sampling strategy is then incorporated into an optimized Monte Carlo integration. The resulting integration scheme yields visible noise reduction and is directly applicable to indoor scene rendering in room-scale interactive experiences. Furthermore, it extends to multiple light sources and achieves superior convergence compared to independent sampling with existing algorithms. We validate our techniques against previous methods based on ray marching and distance sampling to demonstrate their superior noise reduction capability
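
    As a rough illustration of the ratio-estimator idea, the sketch below reuses an exact unoccluded integral (stood in for here by dense quadrature, since the paper's closed form is not reproduced) and estimates only the occluded fraction from a handful of samples; the 2D medium, light, and occluder are toy assumptions, not the paper's setup. Note that when every sample is unoccluded the ratio is exactly 1 and the estimator returns the exact unoccluded value, matching the single-sample behavior described above.

```python
"""Ratio-estimator sketch for occluded single scattering (assumptions:
isotropic phase function, homogeneous medium, toy 2D geometry)."""
import numpy as np

rng = np.random.default_rng(7)
sigma_t = 0.4                      # extinction coefficient of the medium
t_max   = 10.0                     # length of the camera ray inside the box
light   = np.array([3.0, 4.0])     # toy point-light position
origin, direction = np.array([0.0, 0.0]), np.array([1.0, 0.0])

def scatter_integrand(t):
    """Unoccluded single-scattering contribution at distance t along the ray."""
    x = origin + t * direction
    d_light = np.linalg.norm(light - x)
    # camera transmittance * light transmittance * inverse-square falloff
    return np.exp(-sigma_t * (t + d_light)) * sigma_t / (4 * np.pi * d_light**2)

def visibility(t):
    """Toy occluder: a blocker shadows the segment 4 < t < 6."""
    return 0.0 if 4.0 < t < 6.0 else 1.0

# Stand-in for the paper's closed-form unoccluded integral: dense quadrature.
ts   = np.linspace(0.0, t_max, 20_000)
vals = np.array([scatter_integrand(t) for t in ts])
vis  = np.array([visibility(t) for t in ts])
F_unoccluded = vals.mean() * t_max
F_reference  = (vals * vis).mean() * t_max   # ground truth with occlusion

# Ratio estimator: reuse the exact unoccluded value, estimate only the
# fraction of it that survives occlusion.
n = 16
samples = rng.uniform(0.0, t_max, n)
f  = np.array([scatter_integrand(t) for t in samples])
fv = f * np.array([visibility(t) for t in samples])
estimate = F_unoccluded * fv.sum() / f.sum()

print(f"reference: {F_reference:.6e}")
print(f"ratio estimate ({n} samples): {estimate:.6e}")
```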

    Improving VIP Viewer Gaze Estimation and Engagement Using Adaptive Dynamic Anamorphosis

    Anamorphosis for 2D displays can provide viewer-centric perspective viewing, enabling 3D appearance, eye contact, and engagement by adapting dynamically in real time to a single moving viewer’s viewpoint, but at the cost of distorted viewing for other viewers. We present a method for constructing non-linear projections as a combination of anamorphic rendering of selected objects whilst reverting to normal perspective rendering for the rest of the scene. Our study defines a scene consisting of five characters, with one of these characters selectively rendered in anamorphic perspective. We conducted an evaluation experiment and demonstrate that the tracked viewer-centric imagery for the selected character results in improved gaze and engagement estimation. Critically, this is achieved without sacrificing the other viewers’ viewing experience. In addition, we present findings on the perception of gaze direction for regularly viewed characters located off-center from the origin, where perceived gaze shifts increasingly from aligned to misaligned as the distance between viewer and character grows. Finally, we discuss different viewpoints and the spatial relationship between objects
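
    The viewer-centric rendering that dynamic anamorphosis relies on can be realized with a standard off-axis (asymmetric-frustum) projection driven by the tracked eye position; the sketch below shows that common construction, with the display size and eye position as illustrative assumptions rather than the paper's setup. In a full renderer this is paired with a view matrix that moves the tracked eye to the origin, and, per the abstract's method, would be applied only to the selected anamorphic objects while the rest of the scene keeps the normal perspective camera.

```python
"""Viewer-centric (off-axis) projection sketch for a fixed 2D display.
Display-centred coordinates: x right, y up, z toward the viewer; the
display plane sits at z = 0.  All dimensions below are assumptions."""
import numpy as np

def off_axis_projection(eye, screen_w, screen_h, near=0.1, far=100.0):
    """Asymmetric frustum for a tracked eye position `eye` (metres)."""
    d = eye[2]                       # perpendicular eye-to-display distance
    # Frustum extents on the near plane, scaled from the display edges.
    l = (-screen_w / 2 - eye[0]) * near / d
    r = ( screen_w / 2 - eye[0]) * near / d
    b = (-screen_h / 2 - eye[1]) * near / d
    t = ( screen_h / 2 - eye[1]) * near / d
    # Standard glFrustum-style projection matrix.
    return np.array([
        [2*near/(r-l), 0,            (r+l)/(r-l),            0],
        [0,            2*near/(t-b), (t+b)/(t-b),            0],
        [0,            0,           -(far+near)/(far-near), -2*far*near/(far-near)],
        [0,            0,           -1,                      0],
    ])

# A viewer 0.6 m from a 0.5 m x 0.3 m display, offset 0.2 m to the right:
P = off_axis_projection(np.array([0.2, 0.0, 0.6]), 0.5, 0.3)
print(np.round(P, 3))
```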

    Intermediated Reality: A Framework for Communication Through Tele-Puppetry

    We introduce Intermediated Reality (IR), a framework for intermediated communication enabling collaboration through remote possession of entities (e.g., toys) that come to life in mobile Mediated Reality (MR). As part of a two-way conversation, each person communicates through a toy figurine that is remotely located in front of the other participant. Each person's face is tracked through the front camera of their mobile device, and the tracked pose information is transmitted to the remote participant's device along with the synchronized captured voice audio, allowing a turn-based interactive avatar chat session, which we have called ToyMeet. By altering the camera video feed with a reconstructed appearance of the object in a deformed pose, we create the illusion of movement in real-world objects to realize collaborative tele-present augmented reality (AR). In this turn-based interaction, each participant first sees their own captured puppetry message locally through their device's front-facing camera. Next, they receive a view of their counterpart's captured response locally (in AR), with seamless visual deformation of their local 3D toy seen through their device's rear-facing camera. We detail optimization of the animation transmission and switching between devices, minimizing latency for coherent, smooth chat interaction. An evaluation of rendering performance and system latency is included. As an additional demonstration of our framework, we generate facial animation frames for 3D-printed stop motion in collaborative mixed reality. This allows a reduction in printing costs, since the in-between frames of key poses can be generated digitally with shared remote review
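
    A turn-based exchange like ToyMeet needs tracked pose keyframes transmitted in sync with the captured voice clip; the sketch below shows one plausible message schema for such a turn, with all field names hypothetical rather than taken from the paper's wire format. Timestamping each pose frame relative to the start of the turn lets the receiver replay the deformation in sync with audio playback.

```python
"""Hypothetical message schema sketch for a turn-based tele-puppetry
exchange: per-frame head pose and expression weights, timestamped against
the captured audio clip.  Field names are illustrative assumptions."""
import json, base64
from dataclasses import dataclass, field, asdict

@dataclass
class PoseFrame:
    t_ms: int          # timestamp relative to the start of the turn
    position: tuple    # head translation (x, y, z)
    rotation: tuple    # head orientation quaternion (x, y, z, w)
    blendshapes: dict  # expression weights, e.g. {"jawOpen": 0.4}

@dataclass
class TurnMessage:
    sender: str
    frames: list = field(default_factory=list)
    audio_b64: str = ""   # synchronized voice clip, base64-encoded

    def encode(self) -> bytes:
        return json.dumps(asdict(self)).encode()

    @staticmethod
    def decode(data: bytes) -> dict:
        return json.loads(data)

msg = TurnMessage(sender="alice")
msg.frames.append(PoseFrame(0,  (0, 0, 0), (0, 0, 0, 1), {"jawOpen": 0.1}))
msg.frames.append(PoseFrame(33, (0, 0.01, 0), (0, 0.05, 0, 0.999), {"jawOpen": 0.6}))
msg.audio_b64 = base64.b64encode(b"raw PCM bytes here").decode()

packet = msg.encode()            # transmit to the remote participant
print(TurnMessage.decode(packet)["frames"][1]["t_ms"])   # -> 33
```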

    Empowerment and embodiment for collaborative mixed reality systems

    We present several mixed-reality-based remote collaboration settings using consumer head-mounted displays and investigated how two people are able to work together in these settings. We found that the person in the augmented reality (AR) system is regarded as the “leader” (i.e., they provide a greater contribution to the collaboration), whereas no similar “leader” emerges in AR-to-AR and AR-to-VRBody settings. We also found that these special patterns of leadership emerged only for 3D interactions and not for 2D interactions. Results about the participants' experience of leadership, collaboration, embodiment, presence, and copresence shed further light on these findings

    Photo-Realistic Facial Details Synthesis from Single Image

    We present a single-image 3D face synthesis technique that can handle challenging facial expressions while recovering fine geometric details. Our technique employs expression analysis for proxy face geometry generation and combines supervised and unsupervised learning for facial detail synthesis. For proxy generation, we conduct emotion prediction to determine a new expression-informed proxy. For detail synthesis, we present a Deep Facial Detail Net (DFDN), based on a Conditional Generative Adversarial Net (CGAN), that employs both geometry and appearance loss functions. For geometry, we capture 366 high-quality 3D scans from 122 different subjects under 3 facial expressions. For appearance, we use an additional 20K in-the-wild face images and apply image-based rendering to accommodate lighting variations. Comprehensive experiments demonstrate that our framework can produce high-quality 3D faces with realistic details under challenging facial expressions
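
    To make the "geometry and appearance loss functions" concrete, the PyTorch sketch below shows a combined objective in that style on stand-in tensors: an adversarial term plus L1 supervision against scanned geometry and against an appearance target. The tiny network, loss weights, tensor shapes, and the identity placeholder for image-based rendering are all illustrative assumptions, not DFDN itself.

```python
"""Minimal sketch of a combined geometry + appearance CGAN-style objective.
All shapes, weights, and modules below are illustrative assumptions."""
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Stand-in for DFDN: maps a proxy-conditioned input to a detail map."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1))
    def forward(self, x):
        return self.net(x)

G = TinyGenerator()
D = nn.Sequential(nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
                  nn.Conv2d(32, 1, 4, stride=2, padding=1))   # patch critic

bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()
render = lambda detail: detail   # placeholder for differentiable image-based rendering

x       = torch.randn(2, 4, 64, 64)    # proxy geometry + image condition
gt_geo  = torch.randn(2, 1, 64, 64)    # geometry target (from the 3D scans)
gt_img  = torch.randn(2, 1, 64, 64)    # appearance target (in-the-wild image)

fake   = G(x)
logits = D(fake)
adv = bce(logits, torch.ones_like(logits))  # generator tries to fool the critic
geo = l1(fake, gt_geo)                      # geometry supervision
app = l1(render(fake), gt_img)              # appearance term via rendering
loss_G = adv + 10.0 * geo + 10.0 * app      # relative weights are assumptions
loss_G.backward()
print(float(loss_G))
```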

    Deep Precomputed Radiance Transfer for Deformable Objects

    We propose DeepPRT, a deep convolutional neural network that compactly encapsulates the radiance transfer of a freely deformable object for rasterization in real time. With precomputation of radiance transfer (PRT) we can store complex light interactions, appropriate to the shape of a given object, at each surface point for subsequent real-time rendering via fast linear-algebra evaluation against the viewing direction and distant light environment. However, projecting the light transport into an efficient basis representation, such as Spherical Harmonics (SH), requires a numerical Monte Carlo integration, limiting usage to rigid objects or highly constrained deformation sequences. The bottleneck, when considering freely deformable objects, is the heavy memory requirement of wielding all precomputations during rendering with global illumination results. We present a compact representation of PRT for deformable objects with fixed memory consumption, which handles diverse non-linear deformations and is shown to be effective beyond the input training set. Specifically, a U-Net is trained to predict the coefficients of the transfer function (SH coefficients in this case) for a given animation's shape query each frame in real time. We contribute deep learning of PRT within a parametric surface space representation via geometry images, using harmonic mapping with a texture-space-filling energy minimization variant. This surface representation facilitates the learning procedure, removing irrelevant, deformation-invariant information, and supports standard convolution operations. Finally, comparisons with ground truth and a recent linear morphable-model method are provided
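
    Once per-texel SH transfer coefficients have been predicted for the current deformed pose, the "fast linear algebra evaluation" PRT promises reduces to a single matrix product against the environment light's SH coefficients; the numpy sketch below illustrates that runtime shading step. The resolution, SH band count, and random stand-in data are assumptions for illustration only.

```python
"""Sketch of the real-time PRT shading step: dot the per-texel transfer
coefficients (predicted by a network each frame in the DeepPRT setting)
with the environment's SH coefficients.  Sizes are assumptions."""
import numpy as np

H = W = 256          # geometry-image resolution
n_sh = 16            # 4 SH bands -> 16 coefficients per texel

# Stand-ins: in the real system `transfer` would come from the U-Net each
# frame and `light` from projecting the distant environment map into SH once.
transfer = np.random.rand(H, W, n_sh).astype(np.float32)   # per-texel transfer
light    = np.random.rand(n_sh, 3).astype(np.float32)      # RGB environment SH

# Fast linear-algebra evaluation: one (H*W, 16) x (16, 3) matrix product.
radiance = transfer.reshape(-1, n_sh) @ light               # (H*W, 3)
radiance = radiance.reshape(H, W, 3)
print(radiance.shape)   # (256, 256, 3) shaded geometry image
```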

    Real-time variable rigidity texture mapping

